Quick Index
Essence of Big-O


In the following steps we attempt to explain the essence (core) of Big-O.

Empty Structure
We start off with an empty structure. The structure could be any type -- e.g. dynamic array, linked list, binary tree, etc.
Grow a Little
We pretend to grow the structure size to five elements.
Grow More!
We pretend to grow the structure size to one thousand elements.
Grow Towards Infinity
Finally, we pretend to grow the structure size towards an infinite number of elements.
The Essence of Big-O
Big-O is about answering the question:

Does a given algorithm work harder as size grows?

Overview


Now that we have an idea of the essence of Big-O let's dig into the nuts & bolts.

A video that accompanies the page is here....

Concepts



Lingo


Common Lingo:


Interchangeable Terms:


Algorithm Space Requirements:


What is Big-O Notation?


Assume we have:


Big-O analysis basically consists of two steps:


We then use Big-O notation to classify the change in "work" (relative to growth).

Examples will help.

Algorithm Work Effort -- Examples and Quick Quiz


In the following examples, assume we are adding methods (algorithms) to a dynamic array.

The context available to us (a method summary listing) is shown.
Method Summaries:

Here is an algorithm for the "first" method.

The algorithm gets and returns the first element.

Does the algorithm work harder as it grows?



No, size does not change the work (effort) expended by this algorithm.

We can see that the work does not change as size changes -- size is not a "player" here.
first() {
	//Return first element
	//Return null for empty case
	if (this.isEmpty())
		return null
	return this.get(0)
}
Let's say we are doing Big-O analysis on the algo for a "linear search".

Let us pretend the structure is growing in size towards infinity.

Does the algorithm work harder as it grows?



Certainly yes.

The structure size [e.g. "this.size()"] controls how many times we could loop.

Bottom line: growth in "size" makes this algo work harder.

Also remember we assume Worst Case



linearSearch(anElem) {
	//Return matching element
	//or null if no match
	let i = 0
	while (i < this.size()) {
		let nextElem = this.get(i)
		if (nextElem.equals(anElem))
			return nextElem
		i++
	}
	return null
}


Number of Operations



Big-O Levels and T-Shirts


One thousand people would probably have one thousand (if precisely measured) different shirt sizes. But when we choose T-shirts, we broadly group sizes into something like Small, Medium, Large.

Big-O is similar. We do not make exact measurements or operation counts. What we do is determine the grouping (performance level) that a given algorithm belongs to.

Notations



Notations Overview


The outcome of our Big-O analysis will assign an algorithm a Big-O "level" (notation). Let us look at examples.

O(1) - Constant


The O(1) - Constant notation means that an algorithm's work (effort) does not change as structure size grows.

Note that the "1" has no special meaning -- simply think "constant" for "1".

Here is an algorithm we recently looked at.

What is the Big-O for the algo shown?



Answer: O(1)

The effort of this simple algorithm is unchanged as the structure grows in size.
first() {
	//Return first element
	//Return null for empty case
	if (this.isEmpty())
		return null
	return this.get(0)
}
Here is another algorithm.

What is the Big-O for the algo shown?



Answer: O(1)

The effort of this algorithm is also unchanged as the structure grows in size.
last() {
	//Return last element
	//Return null for empty case
	if (this.isEmpty())
		return null
	return this.get(this.size() - 1)
}


O(n) - linear


Question: Do we have a simple loop (not nested) over all the structure elements?
Answer: If yes, then we have O(n)

Note that we do not think of "n" as a measurement -- for "n" we think "linear".

Linear Search
Here is an algorithm we recently looked at.

What is the Big-O for the algo shown?



Answer: O(n)

This algorithm will iterate (loop) more times (i.e. "work harder") as structure size grows.

And the "work" increases linearly relative to structure size "n".



linearSearch(anElem) {
	//Return matching element
	//or null if no match
	let i = 0
	while (i < this.size()) {
		let nextElem = this.get(i)
		if (nextElem.equals(anElem))
			return nextElem
		i++
	}
	return null
}
Sum Every Third Integer
Assume we have a dynamic array of integers here.

We are iterating over fewer elements (every third element).

What is the Big-O for the algo shown?



Answer: O(n)

This algorithm will also iterate (loop) more times (i.e. "work harder") as structure size grows.

And the algorithm "work" also increases linearly with structure size growth.

The effort (iterating on every third) is less than the "linear search" example (iterating on every element) -- but remember we are categorizing broadly.
sum() {
	//Return the sum of every
	//third element
	let i = 0
	let sum = 0
	while (i < this.size()) {
		let nextElem = this.get(i)
		sum += nextElem
		i += 3
	}
	return sum
}


O(n²) - quadratic


Example
A variation of O(n) here. We have a nested loop (a loop in a loop). Hopefully much rarer in our algorithms.

This is called O(n²).

A little math...
nestedLoopSum() {
	//Compute sums in a nested loop
	let i = 0
	let sum = 0
	while (i < this.size()) {
		let j = 0
		while (j < this.size()) {
			let nextElem = this.get(j)
			sum += nextElem
			j++
		}
		i++
	}
	return sum
}
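To see the quadratic growth concretely, we can count the inner-loop executions. This is a minimal sketch (the helper name `countNestedOps` is illustrative, not part of the page):

```javascript
//Count how many times the inner body of a
//nested loop runs for a given structure size
function countNestedOps(size) {
	let ops = 0
	let i = 0
	while (i < size) {
		let j = 0
		while (j < size) {
			ops++ //one unit of "work" per inner iteration
			j++
		}
		i++
	}
	return ops
}

countNestedOps(10)  //100 operations
countNestedOps(100) //10000 operations -- size grew 10x, work grew 100x
```

When size grows by a factor of 10, the work grows by a factor of 100 -- the signature of O(n²).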


O(log n) - logarithmic


Example
Again we assume we have a collection of integers.

Our iteration count will be logarithmically related to the structure size n.

Here we start variable i at (n-1), halve it (integer division) on each pass, and stop when i reaches 0 (somewhat similar to a binary search approach).

This is called O(log n).

A little math...
crazyLogSum() {
	//Compute sums during a logarithmic effort
	let i = this.size() - 1
	let sum = 0
	while (i > 0) {
		let nextElem = this.get(i)
		sum += nextElem
		i = Math.floor(i / 2)
	}
	return sum
}
log n vs n
Just for fun.

As shown here, log n is much smaller than n.

log n has only 50 operations for a structure size of one quadrillion.
n = 1000000000000000
//log base 2
log(n) = 50
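We can check that iteration count with a short sketch (a hypothetical helper that mirrors the halving loop used above):

```javascript
//Count iterations of a loop that
//halves its counter on each pass
function countHalvings(n) {
	let count = 0
	let i = n
	while (i > 0) {
		count++
		i = Math.floor(i / 2)
	}
	return count
}

countHalvings(1000000000000000) //50 iterations for one quadrillion
```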


O(n log n) - logarithmic nested in linear


Example
This is a rarer case where we have a O(log n) operation nested inside an outer O(n) operation.

This is called O(n log n).
Outer loop at O(n)
	Inner operation at O(log n)
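The pseudocode above might be fleshed out like this. Note this is a hypothetical sketch written as a standalone function over a plain array (rather than a method on the page's dynamic array): for every element we run a halving inner loop, so the total work is roughly n times log n.

```javascript
function nLogNSum(arr) {
	//Outer loop: O(n) -- visit every element
	let sum = 0
	for (let i = 0; i < arr.length; i++) {
		//Inner loop: O(log n) -- halving index
		let j = arr.length - 1
		while (j > 0) {
			sum += arr[j]
			j = Math.floor(j / 2)
		}
	}
	return sum
}
```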

Big-O Practical Tips



Nested or Called Methods


Nested Call Question
We are analyzing the method "foo". It has no visible iteration. It just calls the method/task "linearSearch" which has O(n).

What is the Big-O of the method "foo"?



Answer: O(n)

We have to include "nested" (sub) tasks in our analysis. Otherwise, it would be like "hiding" steps from the analysis.
foo(anElem) {
	return this.linearSearch(anElem);
}
Adding Terms Question
This algorithm has two O(1) operations (highlighted) and one O(n) operation (the loop).

What is the Big-O level for the algorithm?



Answer: O(n)

The smaller terms "wash away". An explanation follows.
linearSearch(anElem) {
	//Return matching element
	//or null if no match
	let i = 0
	while (i < this.size()) {
		let nextElem = this.get(i)
		if (nextElem.equals(anElem))
			return nextElem
		i++
	}
	return null
}


Explanation for Adding Terms
For the algo shown above, we would start off with this analysis:

O(n) + O(1) + O(1)


We would recall our discussion about levels highlighting that Big-O terms are wide-band (broad) categories. Therefore, smaller terms "wash away" leaving us with:

O(n)


Generally we do not need to add Big-O terms. Just use the most dominant term (largest growth rate). E.g. O(n) performs worse than O(1). Thus O(n) governs.

A math angle if interested...

The wiki gives a nice explanation...


Multiplying Terms


Multiplying Terms Question
What is the Big-O level for this algorithm that calls two tasks each with O(n)?



Answer: O(n)

An explanation follows.
foo(anElem) {
	this.linearSearch(anElem);
	this.linearSearch(anElem);
}


Explanation for Multiplying Terms
For the algo shown above, we would start off with this analysis:

O(n) + O(n)
= 2 * O(n)


Remember, we are only concerned with the effect structure size growth has on the algo. Thus any factor (like "2" here) that is not affected by growth may be eliminated.

2 * O(n)
= O(n)


Wiki does a nice job explaining this...


Is O(1) Faster Than O(n)?


Note that we are using O(1) and O(n) here for example purposes. The same would apply to other Big-O levels.

Question
If we have two algorithms with one being O(1) and the other being O(n), can we conclude that O(1) will run faster?


No, remember Big-O notation only indicates how an algorithm is affected by structure growth.

As savvy coders we'll use Big-O along with other techniques such as performance profiling. And of course our own common sense / judgement.
Explanation by Example
What is an example where O(n) would likely be slower than O(1)?


As savvy coders we know that in some cases user interface (UI) operations can be quite inefficient.

Here is a case where algorithm #1 "O(1)" would likely be slower than algorithm #2 "O(n)", even though it has a better Big-O level:

Given data structure "structure"

Algorithm #1: O(1)
Do a complex user interface (UI) operation that does not depend on structure size
Assume UI takes about ten minutes to complete
When done, return first element in "structure"

Algorithm #2: O(n)
Do Linear Search of "structure"

Analysis Types



Worst Case Analysis


We've been using a worst case analysis so far. Why?

Back to a linear search of a list of size "n". In the worst case we'll scan the entire list -- performance O(n).

Why be so pessimistic?


However, there are cases where we want more of an average-case analysis. We'll look at that as well.

Average-Case


We know that the worst-case assumption protects us from being overly optimistic. But at times, it will be overly pessimistic.

We might consider an "average-case" approach. The "average-case" analysis is pretty rare for a couple reasons:

1. Input Data Assumptions Limit the Algorithm
It makes assumptions about the input data. This means that we would limit our algorithm to a certain input pattern (e.g., random input). Most algorithms do not limit input patterns.
2. Assumptions About Input Are Difficult
Think about a simple linear search (scan):
  • Worst-case says we scan n elements
  • We might think average-case would say we scan (1/2)n elements
  • However, we have ignored failed searches (which scan all elements). How many searches result in no match? This may not be easy to estimate.
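The bullet points above can be checked with a quick simulation (a hypothetical helper that counts the comparisons a linear scan makes):

```javascript
//Comparisons a linear search makes to find the element
//at position "target" (or to fail, if target is past the end)
function comparisons(size, target) {
	let count = 0
	for (let i = 0; i < size; i++) {
		count++
		if (i === target)
			return count
	}
	return count //failed search: scanned all "size" elements
}

//Average over all successful searches of a 100-element structure:
//(1 + 2 + ... + 100) / 100 = 50.5, roughly (1/2)n
//A failed search always costs the full n = 100 comparisons
```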

We'll next look at a more common technique called "amortized analysis".

Amortized Analysis


An amortized analysis is especially useful when an algorithm has a worst-case cost only on rare occasions.

Consider the algorithm for "append/add" operation for a dynamic array (resizable array) structure:


Performance of steps:


A worst-case analysis would give this algorithm a big-O level of O(n).

However, we know that the slow "grow" operation is rare. An amortized analysis effectively ignores the rare step, thus giving this algorithm a big-O level of O(1).

For a mathematical look at this analysis, baeldung.com does a nice job....

Unlike the average-case approach, this technique does not depend on the input data. It estimates cost over a sequence of operations.
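A minimal sketch of such an append, assuming a doubling growth policy (the class and names are illustrative, not from the page):

```javascript
//Dynamic array append with a doubling "grow" step.
//Most appends are O(1); the rare grow (copy) is O(n),
//but amortized over many appends the cost stays O(1).
class DynamicArray {
	constructor() {
		this.capacity = 4
		this.length = 0
		this.data = new Array(this.capacity)
	}
	append(elem) {
		if (this.length === this.capacity) {
			//Rare worst case: O(n) copy into a larger backing array
			this.capacity *= 2
			const bigger = new Array(this.capacity)
			for (let i = 0; i < this.length; i++)
				bigger[i] = this.data[i]
			this.data = bigger
		}
		//Common case: O(1) write
		this.data[this.length] = elem
		this.length++
	}
}
```

Appending n elements triggers only about log₂(n) grow steps, which is why the amortized cost per append is constant.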

Upper Bound


We might hear the word "upper and lower bounds" in discussions of algorithm performance.

Big-O generally describes an upper bound (asymptotic upper bound).

The use of bounds is a large topic by itself that we will not cover here. If interested, for fun, here is more information...

Summary


Big-O notation describes the effect structure growth has on an algorithm's effort.

It is not a measurement.

Big-O is qualitative, not quantitative: it is a broad grouping, like choosing a T-shirt size ("S", "M", "L").

Identifying iterations/recursion helps the analysis.

Use Big-O with other tools such as performance profilers (tools that make time measurements of running code).

Here are the most common Big-O notations (ordered from best to worst in relative efficiency):


Note: recursion would apply similarly to iterations

There are other less common notations such as O(n³) (hopefully rare).

More Information


More fun here:



Navigation